Design of Multi-Dimensional Kalman Filters
From this point forward we will transition to using linear algebra, as it allows us to easily work with multi-dimensional problems. To begin with, let’s write the state prediction in linear algebra form.
State Transition
The formula below is the state transition function that advances the state from time t to time t + 1. It is simply the relationship between the robot’s position, x , and velocity, \dot{x} . Here, we will assume that the robot’s velocity is not changing.

x' = x + \Delta t \, \dot{x}
We can express the same relationship in matrix form, as seen below. On the left is the posterior state (denoted with the prime symbol, ' ), and on the right are the state transition function and the prior state. This equation shows how the state changes over the time period \Delta t . Note that we are only working with the means here; the covariance matrix will appear later.

\begin{bmatrix} x \\ \dot{x} \end{bmatrix}' = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ \dot{x} \end{bmatrix}
The State Transition Function is denoted F , and the formula can be written as

x' = F x
In reality, the equation should also account for process noise as its own term. However, since process noise is a Gaussian with a mean of 0, the update equation for the mean need not include it.
Now, what happens to the covariance? How does it change in this process?
Sidenote: While it is common to use \Sigma to represent the covariance of a Gaussian distribution in mathematics, it is more common to use the letter P to represent the state covariance in localization.
If you multiply the state, x , by F , then the covariance will be affected by the square of F — that is, F on the left and its transpose on the right. In matrix form, this will look like so:

P' = F P F^T
However, your intuition may suggest that it should be affected by more than just the state transition function. For instance, additional uncertainty may arise from the prediction itself. If so, you’re correct!
To calculate the posterior covariance, the prior covariance is multiplied on the left by the state transition function and on the right by its transpose, and Q is added as an increase in uncertainty due to process noise. Q can account for a robot slowing down unexpectedly, or being drawn off course by an external influence.

P' = F P F^T + Q
Now we’ve updated the mean and the covariance as part of the state prediction.
Quiz 1
Now that you've seen how a simple state transition function is created, let's see if you can construct a more complicated one for the following problem:
You are tracking the position and velocity of a robot in two dimensions, x and y. The state is represented as so,
Find the state update function, F, that will advance the state from time t to time t + 1 based on the state transition equation below.
Try to work through this on paper before looking at the quiz options below.
SOLUTION:
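The solution image is not reproduced here. Assuming the state is ordered as [x, y, \dot{x}, \dot{y}]^T with constant velocity (the ordering is an assumption; other orderings permute the columns), the state transition function would be:

F = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}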
Quiz 2
You are tracking the position, velocity, and acceleration of a quadrotor in the vertical dimension, z. The state of the quadrotor can be represented as so,
Find the state update function, F, that will advance the state from time t to time t + 1 based on the state transition equation below.
SOLUTION:
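The solution image is not reproduced here. Assuming the state is ordered as [z, \dot{z}, \ddot{z}]^T with constant acceleration (the ordering is an assumption), the state transition function would be:

F = \begin{bmatrix} 1 & \Delta t & \tfrac{1}{2}\Delta t^2 \\ 0 & 1 & \Delta t \\ 0 & 0 & 1 \end{bmatrix}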
Measurement Update
Next, we move on to the measurement update step. If we return to our original example, where we were tracking the position and velocity of a robot in the x-dimension, the robot was taking measurements of the location only (the velocity is a hidden state variable). Therefore the measurement function is very simple: a matrix containing a one and a zero. This matrix demonstrates how to map the state to the observation, z .

This matrix, called the Measurement Function, is denoted H . For this example,

H = \begin{bmatrix} 1 & 0 \end{bmatrix}
For the measurement update step, there are a few formulas. First, we calculate the measurement residual, y . The measurement residual is the difference between the measurement and the expected measurement based on the prediction (i.e., we are comparing where the measurement tells us we are against where we think we are). The measurement residual will be used later on in a formula.

y = z - H x'
Next, it's time to consider the measurement noise, denoted R . The formula below maps the state prediction covariance into the measurement space and adds the measurement noise. The result, S , will be used in a subsequent equation to calculate the Kalman Gain.

S = H P' H^T + R
These equations need not be memorized; instead, they can be referred to in text or implemented in code for use and reuse.
Kalman Gain
Next, we calculate the Kalman Gain, K. As you will see in the next equation, the Kalman Gain determines how much weight should be placed on the state prediction, and how much on the measurement update. It is an averaging factor that changes depending on the uncertainty of the state prediction and measurement update.

K = P' H^T S^{-1}
These equations may look complicated and intimidating, but they do nothing more than calculate an averaging factor. Let’s work through a quick example to gain a better understanding.
The last step in the Kalman Filter is to compute the posterior state and to update the new state’s covariance using the Kalman Gain.

x = x' + K y

P = (I - K H) P'
Kalman Filter Equations
These are the equations that implement the Kalman Filter in multiple dimensions.
State Prediction:

x' = F x + u

P' = F P F^T + Q

Measurement Update:

y = z - H x'

S = H P' H^T + R

Calculation of Kalman Gain:

K = P' H^T S^{-1}

Calculation of Posterior State and Covariance:

x = x' + K y

P = (I - K H) P'
The Kalman Filter can successfully recover from inaccurate initial estimates, but it is very important to estimate the noise parameters, Q and R, as accurately as possible, since they determine how much to trust the state prediction versus the measurement.
Programming Exercise
Now it’s your chance to code the multi-dimensional Kalman Filter in C++. The code below uses the C++ Eigen library to define matrices and easily compute their inverse and transpose. Check out the Eigen library's full documentation and go through some of their examples. Here's a list of useful commands that you'll need while working on this quiz:
- Initializing a 2x1 float matrix K: MatrixXf K(2, 1);
- Inserting values into matrix K: K << 0, 0;
- Computing the transpose of matrix K: K.transpose()
- Computing the inverse of matrix K: K.inverse()
Start Quiz:
#include <iostream>
#include <math.h>
#include <tuple>
#include "Core" // Eigen Library
#include "LU" // Eigen Library
using namespace std;
using namespace Eigen;
float measurements[3] = { 1, 2, 3 };
tuple<MatrixXf, MatrixXf> kalman_filter(MatrixXf x, MatrixXf P, MatrixXf u, MatrixXf F, MatrixXf H, MatrixXf R, MatrixXf I)
{
for (int n = 0; n < sizeof(measurements) / sizeof(measurements[0]); n++) {
//****** TODO: Kalman-filter function********//
// Measurement Update
// Code the Measurement Update
// Initialize and Compute Z, y, S, K, x, and P
// Prediction
// Code the Prediction
// Compute x and P
}
return make_tuple(x, P);
}
int main()
{
MatrixXf x(2, 1);// Initial state (location and velocity)
x << 0,
0;
MatrixXf P(2, 2);//Initial Uncertainty
P << 100, 0,
0, 100;
MatrixXf u(2, 1);// External Motion
u << 0,
0;
MatrixXf F(2, 2);//Next State Function
F << 1, 1,
0, 1;
MatrixXf H(1, 2);//Measurement Function
H << 1,
0;
MatrixXf R(1, 1); //Measurement Uncertainty
R << 1;
MatrixXf I(2, 2);// Identity Matrix
I << 1, 0,
0, 1;
tie(x, P) = kalman_filter(x, P, u, F, H, R, I);
cout << "x= " << x << endl;
cout << "P= " << P << endl;
return 0;
}
#include <iostream>
#include <math.h>
#include <tuple>
#include "Core" // Eigen Library
#include "LU" // Eigen Library
using namespace std;
using namespace Eigen;
float measurements[3] = { 1, 2, 3 };
tuple<MatrixXf, MatrixXf> kalman_filter(MatrixXf x, MatrixXf P, MatrixXf u, MatrixXf F, MatrixXf H, MatrixXf R, MatrixXf I)
{
for (int n = 0; n < sizeof(measurements) / sizeof(measurements[0]); n++) {
// Measurement Update
MatrixXf Z(1, 1);
Z << measurements[n];
MatrixXf y(1, 1);
y << Z - (H * x);
MatrixXf S(1, 1);
S << H * P * H.transpose() + R;
MatrixXf K(2, 1);
K << P * H.transpose() * S.inverse();
x = x + (K * y);
P = (I - (K * H)) * P;
// Prediction
x = (F * x) + u;
P = F * P * F.transpose(); // process noise Q assumed zero here
}
return make_tuple(x, P);
}
int main()
{
MatrixXf x(2, 1);// Initial state (location and velocity)
x << 0,
0;
MatrixXf P(2, 2);//Initial Uncertainty
P << 100, 0,
0, 100;
MatrixXf u(2, 1);// External Motion
u << 0,
0;
MatrixXf F(2, 2);//Next State Function
F << 1, 1,
0, 1;
MatrixXf H(1, 2);//Measurement Function
H << 1,
0;
MatrixXf R(1, 1); //Measurement Uncertainty
R << 1;
MatrixXf I(2, 2);// Identity Matrix
I << 1, 0,
0, 1;
tie(x, P) = kalman_filter(x, P, u, F, H, R, I);
cout << "x= " << x << endl;
cout << "P= " << P << endl;
return 0;
}